Simulated reality is the proposition that reality could be simulated—perhaps by computer simulation—to a degree indistinguishable from "true" reality. It could contain conscious minds which may or may not be fully aware that they are living inside a simulation.
This is quite different from the current, technologically achievable concept of virtual reality. Virtual reality is easily distinguished from the experience of actuality; participants are never in doubt about the nature of what they experience. Simulated reality, by contrast, would be hard or impossible to separate from "true" reality.
There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing.
In brain-computer interface simulations, each participant enters from outside by connecting their brain directly to the simulation computer. The computer transmits sensory data to the participant and, in return, reads and responds to their desires and actions; in this manner they interact with the simulated world and receive feedback from it. The participant may be induced by any number of possible means to forget, temporarily or otherwise, that they are inside a virtual realm (e.g. "passing through the veil", a term borrowed from Christian tradition, which describes the passage of a soul from an earthly body to an afterlife). While inside the simulation, the participant's consciousness is represented by an avatar, which can look very different from the participant's actual appearance.
In a virtual-people simulation, every inhabitant is a native of the simulated world. They do not have a "real" body in the external reality of the physical world. Instead, each is a fully simulated entity, possessing an appropriate level of consciousness that is implemented using the simulation's own logic (i.e. using its own physics). As such, they could be downloaded from one simulation to another, or even archived and resurrected at a later time. It is also possible that a simulated entity could be moved out of the simulation entirely by means of mind transfer into a synthetic body. Another way of moving an inhabitant of the virtual reality out of its simulation would be to "clone" the entity, taking a sample of its virtual DNA and creating a real-world counterpart from that model, assuming the real world's physics is compatible with the virtual world's. The result would not bring the "mind" of the entity out of its simulation, but its body would be born in the real world. A variant of this is played out in The Matrix Reloaded: the former Agent Smith takes over the body of a "redpill" inside the Matrix, causing that person's avatar to look like Smith. When that Smith uses one of the hard exits to return to the outside (real) world, the body of the "redpill" looks the same to the people around him, but he is in reality a version of Smith.
This category subdivides into two further types:
In an emigration simulation, the participant enters the simulation from the outer reality, as in the brain-computer interface simulation, but to a much greater degree of immersion. On entry, the participant could use a variety of hypothetical methods to participate in the simulated reality, including mind transfer to temporarily relocate their mental processing into a virtual person. After the simulation is over, the participant's mind is returned to the outer reality, along with all new memories and experiences gained within (as in the movie The Thirteenth Floor, or when one flatlines in Neuromancer).
Further, there is the option (also from The Thirteenth Floor) of a virtual person, born inside the simulation, who "wakes up", wishes to escape, and somehow succeeds in being transferred into an outer-reality person; that is, of exiting (emigrating) the simulation and being transformed on exit into a "real" person. Since such an emigrating inhabitant has no associated outer-reality person (a user with a "real body"), it would have to be transferred either into a newly created outer-reality person (assuming that is possible) or into an already existing one, who may or may not be a "player" of the simulation. If the host is a player, that player was previously associated with some other inhabitant of the simulated world; on taking over (or merging with) the emigrating inhabitant, the player could destroy the old inhabitant, abandon it (leaving it in the simulated world without a user, temporarily or permanently), or continue to play the simulation through it as a transformed user, now enriched with, or even entirely replaced by, the virtual person who emigrated.
Finally, there is the option of a simulated reality being dynamically constructed and modified using real-world matter and energy within an enclosing container or room, such as the "Holodeck" in Star Trek. Upon entering such a space, the real-world person would effectively feel immersed in the simulated environment, with a variety of potential methods being used to convince the user of the presence of motion, gravity, environments, and so on, and with the user presumably able to interact (or not) with the simulated reality.
An intermingled simulation supports both types of consciousness: "players" from the outer reality who are visiting (as a brain-computer interface simulation) or emigrating, and virtual-people who are natives of the simulation and hence lack any physical body in the outer reality.
The Matrix movies feature an intermingled type of simulation: they contain not only human minds (with their physical bodies remaining outside), but also sentient software programs that govern various aspects of the computed realm.
Ten years after Hans Moravec first published the simulation argument (and three years after its update in Moravec's second full-length popular science book),[1] the philosopher Nick Bostrom investigated the possibility that we may be living in a simulation.[2] In simplified form, the argument runs as follows: a sufficiently advanced civilization could run enormous numbers of detailed simulations of minds like ours, and if such simulated minds can be conscious, then simulated observers would vastly outnumber non-simulated ones.
The ultimate question is then, if one accepts that the above premises are at least possible, which is more likely: that we are among the non-simulated minds, or among the simulated ones?
In greater detail, his argument attempts to prove a trichotomy: either almost all civilizations at our stage of development go extinct before becoming technologically capable of running such simulations; or almost no technologically mature civilizations are interested in running simulations of minds like ours; or we are almost certainly living in a simulation.
Bostrom's argument uses the premise that, given sufficiently advanced technology, it is possible to simulate entire inhabited planets, or even larger habitats or entire universes (perhaps as quantum simulations in pockets of space-time), including all the people on them, on a computer, and that simulated people can be fully conscious and are as fully sentient as non-simulated people.
A particular case provided in the original paper poses the scenario where we reason based on the trichotomy listed above. We deny the first hypothesis: we assume that the human race can reach such a technologically advanced level without destroying itself in the process. We then deny the second hypothesis: we presume that once we reached such a level we would still be interested in history, the past, and our ancestors, and that there would be no legal or moral strictures on running such simulations. If these two assumptions are made, then a technologically mature civilization would run many ancestor-simulations, simulated minds would greatly outnumber non-simulated ones, and we should therefore conclude that we are far more likely to be among the simulated minds than among the originals.
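This conclusion can be stated compactly. Roughly following the notation of Bostrom's paper,[2] let $f_p$ be the fraction of human-level civilizations that survive to reach a simulation-capable stage, $f_I$ the fraction of those that are interested in running ancestor-simulations, and $\bar{N}_I$ the average number of such simulations run by an interested civilization. The fraction of all observers with human-type experiences who are simulated is then approximately

$$f_{\mathrm{sim}} \approx \frac{f_p\, f_I\, \bar{N}_I}{f_p\, f_I\, \bar{N}_I + 1},$$

so the trichotomy above corresponds to $f_p \approx 0$, $f_I \approx 0$, or $f_{\mathrm{sim}} \approx 1$ (the last holding whenever $f_p f_I \bar{N}_I \gg 1$).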
Assumptions as to whether the human race (or another intelligent species) could reach such a technological level without destroying itself depend greatly on the values entering the Drake equation, which attempts to estimate the number of intelligent technological species communicating via radio in a galaxy at any given time. An expanded form of the equation considers the number of posthuman civilizations that would ever exist in a given universe. If the average over all universes, real or simulated, is at least one such civilization per universe's entire history, then the odds are overwhelmingly in favor of the proposition that the average civilization is in a simulation, assuming that such simulated universes are possible and that such civilizations would want to run such simulations.
As to the question of whether we are living in a simulated reality or a 'real' one, the answer may be 'indistinguishable', in principle. In a commemorative article dedicated to the World Year of Physics 2005, the physicist Bin-Guang Ma proposed a theory of the 'relativity of reality',[3] although the notion has been suggested in other contexts, such as ancient philosophy (Zhuangzi's 'Butterfly Dream') and psychological analysis.[4] The proposal generalizes the relativity principle in physics, which concerns the relativity of motion: motion has no absolute meaning, since to say whether something is moving or at rest one must adopt a reference frame, and without one the states of rest and uniform motion cannot be distinguished. A similar property is suggested for reality itself: without a reference world, one cannot tell whether the world one is living in is real or simulated, so reality has no absolute meaning. As in Einstein's relativity, the theory rests on two fundamental principles.
The first principle ('equally real') says that all worlds are equally real, even partially simulated ones: if they contain living beings, those beings feel the same level of reality that we feel. In this theory, the question of whether we are living in a simulated reality or a 'real' one is meaningless, because the two are indistinguishable in principle. The 'equally real' principle does not mean that we cannot distinguish a concrete computer simulation from our own world, since when we talk about a computer simulation we already have a reference world (the world we are in).
Coupled with the second principle ('coexistence'), the theory posits a space-time transformation between two across-reality objects (one in the real world and one in the virtual world), which is an example of an interreality (mixed reality) system. The first 'interreality physics' experiment may be the one conducted by V. Gintautas and A. W. Hubler, in which a mixed-reality correlation between two pendula (one real and one virtual) was indeed observed.[5]
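A minimal numerical sketch of the kind of mixed-reality coupling described in the pendulum experiment[5] can be written as two pendula integrated together with a weak bidirectional coupling term. Here both pendula are simulated; in practice the "real" one would be replaced by sensor readings from a physical pendulum. The coupling constant, time step, and initial conditions below are illustrative assumptions, not values from the experiment.

```python
import math

# Illustrative parameters (assumed, not taken from the Gintautas-Hubler experiment)
G_OVER_L = 9.81 / 1.0   # gravitational acceleration / pendulum length
COUPLING = 0.05          # weak bidirectional coupling strength
DT = 0.001               # integration time step (seconds)

def step(theta_a, omega_a, theta_b, omega_b):
    """Advance two weakly coupled pendula by one time step (semi-implicit Euler)."""
    # Each pendulum feels its own gravity plus a spring-like pull toward the other.
    alpha_a = -G_OVER_L * math.sin(theta_a) + COUPLING * (theta_b - theta_a)
    alpha_b = -G_OVER_L * math.sin(theta_b) + COUPLING * (theta_a - theta_b)
    omega_a += alpha_a * DT
    omega_b += alpha_b * DT
    theta_a += omega_a * DT
    theta_b += omega_b * DT
    return theta_a, omega_a, theta_b, omega_b

# "Real" pendulum starts displaced; "virtual" pendulum starts at rest.
state = (0.3, 0.0, 0.0, 0.0)
for _ in range(100_000):
    state = step(*state)
print("final angles:", state[0], state[2])  # a correlation builds up via the coupling
```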
Computationalism is a philosophy of mind theory stating that cognition is a form of computation. It is relevant to the Simulation Hypothesis in that it illustrates how a simulation could contain conscious subjects, as required by a "virtual people" simulation. For example, it is well known that physical systems can be simulated to some degree of accuracy. If computationalism is correct, and if there is no problem in generating artificial consciousness from cognition, it would establish the theoretical possibility of a simulated reality. However, the relationship between cognition and phenomenal consciousness is disputed. It is possible that consciousness requires a physical substrate not provided by a computational simulator, and simulated people, while behaving appropriately, would be philosophical zombies. This would also seem to negate Nick Bostrom's simulation argument; we cannot be inside a simulation, as conscious beings, if consciousness cannot be simulated. However, we could still be within a simulation, and yet be envatted brains. This would allow us to exist as conscious beings within a simulated environment, even if a simulated environment could not simulate consciousness.
Some theorists[6][7] have argued that if the "consciousness-is-computation" version of computationalism and mathematical realism (also known as mathematical Platonism) are both true, then our consciousness must be inside a simulation. This argument states that a "Plato's heaven" or ultimate ensemble would contain every algorithm, including those which implement consciousness. Platonic simulation theories are subsets of multiverse theories and theories of everything.
A dream could be considered a type of simulation capable of fooling someone who is asleep. As a result, the "dream hypothesis" cannot be ruled out, although it has been argued that common sense and considerations of simplicity rule against it.[8] One of the first philosophers to question the distinction between reality and dreams was Zhuangzi, a Chinese philosopher from the 4th century BC. He phrased the problem as the well-known "Butterfly Dream", which went as follows:
Once Zhuangzi dreamt he was a butterfly, a butterfly flitting and fluttering around, happy with himself and doing as he pleased. He didn't know he was Zhuangzi. Suddenly he woke up and there he was, solid and unmistakable Zhuangzi. But he didn't know if he was Zhuangzi who had dreamt he was a butterfly, or a butterfly dreaming he was Zhuangzi. Between Zhuangzi and a butterfly there must be some distinction! This is called the Transformation of Things. (2, tr. Burton Watson 1968:49)
The philosophical underpinnings of this argument are also brought up by Descartes, who was one of the first Western philosophers to do so. In Meditations on First Philosophy, he states "... there are no certain indications by which we may clearly distinguish wakefulness from sleep",[9] and goes on to conclude that "It is possible that I am dreaming right now and that all of my perceptions are false".[9]
Chalmers (2003) discusses the dream hypothesis and notes that it comes in two distinct forms: the hypothesis that one is currently dreaming, and the hypothesis that one has always been dreaming.
Both the dream argument and the simulation hypothesis can be regarded as skeptical hypotheses; however, in raising these doubts, just as Descartes noted that his own thinking convinced him of his own existence, the very existence of the argument is testament to the possibility of its own truth.
Another state of mind in which an individual's perceptions have no external physical basis is psychosis, though psychosis itself may have a physical basis in the real world; explanations vary.
A decisive refutation of any claim that our reality is computer-simulated would be the discovery of some uncomputable physics, because if reality is doing something that no computer can do, it cannot be a computer simulation. (Computability here generally means computability by a Turing machine; hypercomputation, or super-Turing computation, introduces other possibilities, which are dealt with separately below.) In fact, known physics is held to be (Turing) computable,[11] but the statement "physics is computable" needs to be qualified in various ways. Leaving aside symbolic computation, a number, and in particular a real number (one with an infinite number of digits), is said to be computable if a Turing machine can go on emitting its digits endlessly, never reaching a "final digit".[12] This runs counter, however, to the idea of simulating physics in real time (or any plausible kind of time). Known physical laws (including those of quantum mechanics) are thoroughly infused with real numbers and continua, and the universe seems able to decide their values on a moment-by-moment basis. As Richard Feynman put it:[13]
"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypotheses that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities".
The objection could be made that the simulation does not have to run in "real time".[14] This misses an important point, though: the shortfall is not linear; rather, it is a matter of performing an infinite number of computational steps in a finite time.[15]
Note that these objections all relate to the idea of reality being exactly simulated. Ordinary computer simulations as used by physicists are always approximations.
These objections do not apply if the hypothetical simulation is being run on a hypercomputer, a hypothetical machine more powerful than a Turing machine.[16] Unfortunately, there is no way for those inside a simulation to work out whether the computer running it can do things that computers within the simulation cannot. No one has shown that the laws of physics inside a simulation and those outside it have to be the same, and simulations of different physical laws have been constructed.[17] The problem is that no evidence could conceivably be produced to show that the universe is not some kind of computer, making the simulation hypothesis unfalsifiable and therefore scientifically unacceptable, at least by Popperian standards.[18]
All conventional computers, however, are less than hypercomputational, and the simulated reality hypothesis is usually expressed in terms of conventional computers, i.e. Turing machines. To the extent that it is, the hypothesis is falsifiable.
Roger Penrose, an English mathematical physicist, presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine-type of digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function.
In his book The Fabric of Reality, David Deutsch discusses how the limits to computability imposed by Gödel's incompleteness theorem affect the virtual-reality rendering process. To do this, Deutsch invents the notion of a CantGoTu environment (named after Cantor, Gödel, and Turing), using Cantor's diagonal argument to construct an 'impossible' virtual reality which a physical VR generator would not be able to generate. The construction works by imagining that all VR environments renderable by such a generator can be enumerated and labelled VR1, VR2, and so on. Slicing time into discrete chunks, we can create an environment which is unlike VR1 in the first timeslice, unlike VR2 in the second timeslice, and so on. This environment is not in the list, and so it cannot be generated by the VR generator. Deutsch then goes on to discuss a universal VR generator, which as a physical device would not be able to render all possible environments, but would be able to render those environments which can be rendered by all other physical VR generators. He argues that 'an environment which can be rendered' corresponds to a set of mathematical questions whose answers can be calculated, and discusses various forms of the Turing principle, which in its initial form refers to the fact that it is possible to build a universal computer which can be programmed to execute any computation that any other machine can do. Attempts to capture the process of virtual-reality rendering provide a version which states: "It is possible to build a virtual-reality generator, whose repertoire includes every physically possible environment". In other words, a single, buildable physical object can mimic all the behaviours and responses of any other physically possible process or object. This, it is claimed, is what makes reality comprehensible.
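Deutsch's CantGoTu construction is a straightforward diagonalization, and can be sketched abstractly. In the toy model below, each renderable environment is represented (as a purely illustrative assumption) by a function from a time-slice index to some rendered content; the diagonal environment is defined to differ from the n-th environment at the n-th time slice, so it cannot appear anywhere in the enumeration.

```python
# Toy diagonalization over an enumeration of "renderable environments".
# An environment is modelled (purely for illustration) as a function
# mapping a time-slice index to the content rendered in that slice.

def make_enumeration():
    """An assumed enumeration VR1, VR2, ... of renderable environments."""
    def vr(n):
        # The n-th environment renders the value (n + t) at time slice t.
        return lambda t: n + t
    return vr

enumerate_vr = make_enumeration()

def cant_go_tu(t):
    """The diagonal environment: at time slice t, differ from environment t."""
    return enumerate_vr(t)(t) + 1  # anything other than what VR_t renders at slice t

# For every n, cant_go_tu disagrees with VR_n at time slice n,
# so cant_go_tu cannot be any environment in the enumeration.
for n in range(5):
    assert cant_go_tu(n) != enumerate_vr(n)(n)
print("the diagonal environment differs from each VR_n at slice n")
```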
Later in the book, Deutsch argues for a very strong version of the Turing principle, namely: "It is possible to build a virtual reality generator whose repertoire includes every physically possible environment." However, in order to include every physically possible environment, the computer would have to be able to include a full simulation of the environment containing itself. Even so, a computer running a simulation need not run every possible physical moment to be plausible to its inhabitants.
The computational requirements for molecular dynamics are such that in 2002, "while the fastest proteins fold on the order of tens of microseconds", "current single computer processors" could "only simulate on the order of a nanosecond of real-time of folding in full atomic detail per CPU day".[19][20] To simulate an entire galaxy would require more computing power than can presently be envisioned, assuming that no shortcuts are taken when simulating areas that nobody is observing.
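A rough back-of-the-envelope calculation using the 2002 figures quoted above illustrates the gap: at one nanosecond of simulated folding per CPU-day, tens of microseconds of folding implies tens of thousands of CPU-days for a single folding event of a single protein. The exact folding time below is an illustrative assumption within the quoted range.

```python
# Back-of-the-envelope estimate using the 2002 figures cited above.
folding_time_s = 20e-6          # "tens of microseconds" (illustrative value)
simulated_per_cpu_day_s = 1e-9  # ~1 nanosecond of folding simulated per CPU-day

cpu_days = folding_time_s / simulated_per_cpu_day_s
print(f"CPU-days to fold one protein once: {cpu_days:,.0f}")  # ~20,000 CPU-days
```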
In answer to this objection, Bostrom calculated that simulating the brain functions of all humans who have ever lived would require roughly 10^33 to 10^36 calculations.[2] He further calculated that a planet-sized computer built with computronium using known nanotechnological methods would perform about 10^42 calculations per second, and that a planet-sized computer, or an even larger stellar-system-sized computer, is not inherently impossible to build (although the speed of light could severely constrain the speed at which its subprocessors share data). In any case, a simulation need not compute every single molecular event that occurs inside it; it may only process events that its participants can actively perceive. This is particularly the case if the simulation contained only a handful of people; far less processing power would be needed to make them believe they were in a "world" much larger than was actually the case. A real-world analogue of this could be the observer effect or the Heisenberg uncertainty principle: an unobserved region of space is indeterminate until observed, which could be because the simulating computer does not simulate it until it needs to.
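Taking Bostrom's figures at face value, the arithmetic is straightforward: even the high-end estimate of 10^36 calculations would occupy a 10^42 calculations-per-second planet-sized computer for only a tiny fraction of a second.

```python
# Orders of magnitude from Bostrom's estimates cited above.
calcs_needed_low, calcs_needed_high = 1e33, 1e36   # simulate all human brain-history
calcs_per_second = 1e42                            # planet-sized "computronium" computer

print(f"best case : {calcs_needed_low / calcs_per_second:.0e} seconds")   # ~1e-09 s
print(f"worst case: {calcs_needed_high / calcs_per_second:.0e} seconds")  # ~1e-06 s
```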
The existence of simulated reality is unprovable in any concrete sense: any "evidence" that is directly observed could be another simulation itself. In other words, there is an infinite regress problem with the argument. Even if we are a simulated reality, there is no way to be sure the beings running the simulation are not themselves a simulation, and the operators of that simulation are not a simulation, ad infinitum. Given the premises of the simulation argument, any reality, even one running a simulation, has no better or worse a chance of being a simulation than any other.
It is perhaps erroneous to apply our current sense of feasibility to projects undertaken in an outer reality, where resources and physical laws may be very different. The objection also assumes that the designers would need to simulate reality at a level of detail beyond our natural senses.
Also, a simulated reality need not run in real time, which eases the computational constraints. The inhabitants of a simulated universe would have no way of knowing whether one day of subjective time actually required much longer to calculate in their host computer, or vice versa, or whether the simulation is run in pieces on different computers, or even by a million generations of monks working weekends on abacuses, all without the simulation missing a beat 'in simulation time'.
A computed simulation may have voids or other errors that manifest inside. As a simple example of this, when the "hall of mirrors" effect occurs in the first person shooter Doom, the game attempts to display "nothing" and obviously fails in its attempt to do so. If a void can be found and tested, and if the observers survive its discovery, then it may reveal the underlying computational substrate. However, lapses in physical law could be attributed to other explanations, for instance inherent instability in the nature of reality.
In fact, bugs could be very common. An interesting question is whether knowledge of bugs or loopholes in a sufficiently powerful simulation would be instantly erased the minute it is observed, since presumably all thoughts and experiences in a simulated world could be carefully monitored and altered. This would, however, require enormous processing capability in order to monitor billions of people simultaneously. If this is the case, we would never be able to act on the discovery of bugs. Indeed, any simulation sufficiently determined to protect its existence could erase any proof that it was a simulation whenever such proof arose, provided it had the enormous capacity necessary to do so.
To take this argument to an even greater extreme, a sufficiently powerful simulation could make its inhabitants think that erasing proof of its existence is difficult. This would mean that the computer actually has an easy time erasing glitches, while we all believe that changing reality requires great power. Miracles and paranormal activity, especially those that seem to harm people, could be interpreted as software bugs; this notion is explored in The Matrix, where déjà vu is considered a sign of a crude alteration to the system, and in The Animatrix, where software glitches concentrated in a house lead the neighbors to call it "haunted" until the glitches are corrected by the Agents. A related reading casts demons and evil spirits as 'hackers' who attempt to take advantage of such flaws in the system.
Additionally, it can be argued that what are in fact errors in the software are perceived by us as part of "proper" reality. For example, tornadoes may never have been meant to exist in this simulation, but came to be through a programming error. Removing them would only arouse suspicion and raise further questions among the inhabitants, so it would make more sense to leave the "error" in place.
The simulation may contain hidden/secret messages or exits placed there by the designer or by other inhabitants who have solved the riddle in the way that easter eggs in computer games and other media sometimes do. People have already spent considerable effort searching for patterns or messages within the endless decimal places of the fundamental constants such as e and pi. In Carl Sagan's science fiction novel Contact, Sagan contemplates the possibility of finding a signature embedded in pi (in its base-11 expansion) by the creators of our reality.
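As a toy illustration of this kind of search, the snippet below scans a modest number of decimal digits of pi for a short digit pattern, using the mpmath library for arbitrary-precision digits. The pattern and digit count are arbitrary choices, and, as noted below, finding a short pattern in a presumably normal number would by itself prove nothing.

```python
from mpmath import mp

mp.dps = 10_000                        # working precision: 10,000 decimal digits
pi_digits = mp.nstr(+mp.pi, 10_000)    # "3.14159..." as a string
pi_digits = pi_digits.replace(".", "")

pattern = "31337"                      # arbitrary toy "message" to search for
position = pi_digits.find(pattern)
if position >= 0:
    print(f"pattern found at digit position {position}")
else:
    print("pattern not found in the first 10,000 digits")
```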
However, such messages have not been made public if they have been found, and the argument relies on the messages being truthful. As usual, other hypotheses could explain the same evidence. In any case, if such constants are in fact normal, then at some point an apparently meaningful message will appear in them (this is known as the infinite monkey theorem), not necessarily because it was placed there.
The Easter Egg Theory also assumes that the simulation's designers would want to inform its inhabitants of its real nature; they may not. Conversely, if the human race eventually becomes capable of creating intelligent programs (i.e. machines) living inside a virtual subspace of our "real" world, an interesting question is whether we would be able to prevent such sentient creations from discovering their artificial nature (see Philip K. Dick's Do Androids Dream of Electric Sheep?).
A computer simulation would be limited to the processing power of its host computer, and so there may be aspects of the simulation that are not computed at a fine-grained (e.g. subatomic) level. This might show up as a limitation on the accuracy of information that can be obtained in particle physics.
However, this argument, like many others, assumes that accurate judgments about the simulating computer can be made from within the simulation. If we are being simulated, we might be misled about the nature of computers.
Taken one step further, the "fine-grained" elements of our world could themselves be simulated, since we never see sub-atomic particles directly, owing to our inherent physical limitations. To see such particles we rely on instruments that magnify or translate the information into a format our limited senses can view: computer printouts, microscope images, and so on. We therefore essentially take it on faith that these are accurate portrayals of a fine-grained world that appears to exist in a realm beyond our natural senses. If the sub-atomic level is itself simulated only when examined, the processing power required to generate a realistic world would be greatly reduced.
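One way to picture "only computing detail when it is examined" is lazy, deterministic generation: fine-grained content is derived from a seed the first time an observer inspects a region, and is never stored or computed otherwise. The sketch below is a purely illustrative model of that idea, not a claim about how any actual simulation would work.

```python
import hashlib

class LazyWorld:
    """Toy world in which fine-grained detail exists only when observed.

    Detail is derived deterministically from a seed, so the world stays
    consistent across repeated observations without pre-computing anything.
    """

    def __init__(self, seed: str):
        self.seed = seed
        self.observed = {}  # cache of regions that have actually been looked at

    def observe(self, region: tuple) -> int:
        if region not in self.observed:
            # Detail is generated only now, at observation time.
            digest = hashlib.sha256(f"{self.seed}:{region}".encode()).hexdigest()
            self.observed[region] = int(digest[:8], 16)
        return self.observed[region]

world = LazyWorld(seed="universe-42")
print(world.observe((0, 0, 0)))   # detail computed on first observation
print(world.observe((0, 0, 0)))   # identical value, no recomputation needed
```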
In theoretical physics, digital physics holds the basic premise that the entire history of our universe is computable in some sense. The hypothesis was pioneered in Konrad Zuse's book Rechnender Raum (translated by MIT into English as Calculating Space, 1970), which focuses on cellular automata. Juergen Schmidhuber suggested that the universe could be a Turing machine, because there is a very short program that outputs all possible programs in an asymptotically optimal way. Other proponents include Edward Fredkin, Stephen Wolfram, and Nobel laureate Gerard 't Hooft. They hold that the apparently probabilistic nature of quantum physics is not incompatible with the notion of computability. A quantum version of digital physics has been proposed by Seth Lloyd. None of these suggestions has been developed into a workable physical theory.
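Zuse's Rechnender Raum and Wolfram's work both take cellular automata as the model of computation. The sketch below runs an elementary one-dimensional cellular automaton (Rule 110, which is known to be Turing complete) for a few steps, purely to illustrate the kind of discrete, local update rule these proposals have in mind.

```python
RULE = 110  # elementary cellular automaton rule number (Rule 110 is Turing complete)

def step(cells):
    """Apply one synchronous update of the elementary CA to a row of 0/1 cells."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right
        new.append((RULE >> neighbourhood) & 1)
    return new

row = [0] * 31 + [1] + [0] * 31          # a single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```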
It can be argued that the use of continua in physics constitutes a possible argument against the simulation of a physical universe. Removing the real numbers and uncountable infinities from physics would counter some of the objections noted above, and at least make computer simulation a possibility. However, digital physics must overcome these objections. For instance, cellular automata would appear to be a poor model for the non-locality of quantum mechanics.
Some of the people in a simulated reality may be automatons, philosophical zombies, or 'bots' added to the simulation to make it more realistic or interesting or challenging. Indeed, it is conceivable that every person other than oneself is a bot. Bostrom called this a "me-simulation", in which oneself is the only sovereign lifeform, or at least the only inhabitant who entered the simulation from outside.
Bostrom further elaborated on the idea of bots:
In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or "shadow-people" – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much [computationally] cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience.[2]
The idea of "zombies" has a well known corollary in the video game industry where computer generated characters are known as Non-Player Characters ("NPCs"). The term 'bots' is short for 'robots'. The usage originated as the name given to the simple AI opponents of modern video games.
A brain-computer interface simulated reality may be required to progress at a rate that is near real time; that is, time within it may need to pass at approximately the same rate as the outer reality that contains it. This is because the players interact with the simulation using brains that still reside in the outer reality; if the simulation ran faster or slower, those brains could notice, because they are not contained within it.
It is possible that time passes slower or quicker for brains in a dream state (i.e., in a brain-computer interface trance); the point, however, is that they still function at a finite, biological speed, and the simulation must keep pace with them, unless those interacting with the simulation are augmented and capable of processing information at the same rate as the simulation itself.
A virtual-people or emigration simulated reality, on the other hand, need not. This is because its inhabitants are using the simulation's own physics in order to experience, think, and react. If the simulation were slowed down or sped up, so also would the inhabitants' own senses, brains, and muscles, as well as every other molecule inside. The inhabitants would perceive no change in the passage of time, simply because their method of measuring time is dependent on the cosmic clock that they are seeking to measure. (They could perform the measurement only if they had some access to data from the outer reality.)
For that matter, they could not even detect whether the simulation had been completely halted: a pause in the simulation would pause every life and mind within it. When the simulation was later resumed, the inhabitants would continue exactly as they were before the pause, completely unaware that (for example) their cosmos had been paused and archived for a billion years before being resumed. A simulation could also be created with its inhabitants already possessing memories as though they had already lived part of their lives before; said inhabitants would not be able to tell the difference unless informed of it by the simulation. (Compare with the five minute hypothesis and Last Thursdayism).
One practical implication of this is that a virtual-people or hybrid simulation does not require a computer powerful enough to model its entire cosmos at full speed. Since a universal computer can emulate any other computation at some rate, a simulation can progress at whatever speed its host computer can manage; it would be constrained by available memory, but not by computation rate.
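The point that a virtual-people simulation is tied to simulation time rather than wall-clock time can be made concrete with a trivial sketch: the host may pause, slow down, or checkpoint the loop below arbitrarily, and nothing observable from inside the loop changes, because every "internal" quantity depends only on the step counter. The names and state here are illustrative.

```python
import json
import time

def advance(state):
    """One tick of an (illustrative) toy world; depends only on the current state."""
    return {"tick": state["tick"] + 1, "value": (state["value"] * 31 + 7) % 1000}

state = {"tick": 0, "value": 1}
for _ in range(100):
    state = advance(state)
    if state["tick"] == 50:
        # The host can pause for any wall-clock duration, or checkpoint and
        # resume much later; the trajectory of `state` is unaffected.
        checkpoint = json.dumps(state)
        time.sleep(0.1)
        state = json.loads(checkpoint)

print(state)  # identical to an uninterrupted run of 100 ticks
```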
Recursive simulation involves a simulation, or an entity in a simulation, creating another simulation within a simulated environment.[21] The 'parent' simulator would be simulating all of the atoms of the computer, atoms which happen to be calculating a 'child' simulation. By way of illustration: in Fallout 3, Metal Gear Solid 2, and Xenosaga, the player character at one point must enter a virtual reality simulation in the game. Alternatively, imagine a Java Runtime Environment running a virtual computer on a "real-world" computer that itself is located within a simulation.
This recursion could continue to infinitely many levels: a simulation containing a computer running a simulation containing a computer running a simulation, and so on. Assuming no level has infinite computational power, the recursion is subject to one constraint: each 'nested' simulation can draw only on the resources its parent devotes to it, and must therefore run slower than its parent, be smaller in extent, or be less detailed (more coarsely grained) than its parent.
The last of these is the basis of the idea that quantum uncertainties are circumstantial evidence that our own reality is a simulation. However, this assumes that there is a finite limitation somewhere in the chain. Assuming an infinite number of simulations within simulations, there need not be any noticeable difference between any of the subsets.
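A minimal illustration of the nesting constraint is a simulation that spends part of its own finite step budget running a child simulation; the child necessarily advances more slowly, in host steps, than its parent. All names and numbers here are illustrative.

```python
class Simulation:
    """Toy nested simulation: a parent advances its child only on some of its
    own steps, so each deeper level runs slower in host time than its parent."""

    def __init__(self, name, child=None, parent_steps_per_child_step=10):
        self.name = name
        self.ticks = 0
        self.child = child
        self.ratio = parent_steps_per_child_step

    def step(self):
        self.ticks += 1
        if self.child is not None and self.ticks % self.ratio == 0:
            self.child.step()

level2 = Simulation("level 2")
level1 = Simulation("level 1", child=level2)
level0 = Simulation("level 0", child=level1)

for _ in range(1000):            # 1000 steps of the outermost simulation
    level0.step()

print(level0.ticks, level1.ticks, level2.ticks)   # 1000, 100, 10
```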
Simulated reality is a theme that pre-dates science fiction. In medieval and Renaissance religious theatre, the concept of the world as theatre appears frequently.
Virtual reality, and to a lesser extent simulated reality, are key facets of the cyberpunk genre, regardless of format.